Masoomeh Estaji; Reza Morad Sahraee; Monire Shahbaz
Abstract
The present study investigated repair fluency, one of the three main aspects of fluency in Skehan’s model (2003; 2009), across different proficiency levels, using test-taker data from a speaking test. The model distinguishes three main aspects of fluency: speed fluency, breakdown fluency, and repair fluency. To date, no research has measured the speech fluency of foreign test takers of Persian; this study therefore sought to inform an appropriate rating scale for speech fluency. In particular, it examined two components of the repair fluency construct in the model, repetitions and false starts, and attempted to determine the extent to which they varied across the Persian proficiency levels of foreign test-takers. The speech of 23 foreign learners of Persian taking a speaking test was transcribed and rated, and their repetitions and false starts were labeled. The spoken data were classified into four proficiency levels according to the Persian Teaching Standard Reference (PTSR), namely pre-intermediate, intermediate, upper-intermediate, and advanced. The recordings lasted 44 minutes in total (11 minutes per level). The results, analyzed using the Kruskal-Wallis test, revealed that false starts and repetitions did not distinguish among Persian proficiency levels. The findings imply that raters and rating scale designers need not consider repetitions and false starts when identifying or describing different Persian proficiency levels.
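As a rough illustration of the analysis named above, a Kruskal-Wallis test comparing a repair measure across four proficiency groups could be run as follows. The counts below are invented purely for illustration and are not the study's data; `scipy` is assumed to be available.

```python
# Illustrative sketch only: Kruskal-Wallis H test on hypothetical
# repetition counts per test taker, grouped by PTSR proficiency level.
# These numbers are invented; they are NOT the study's measurements.
from scipy.stats import kruskal

pre_intermediate   = [4, 6, 5, 7, 3, 5]
intermediate       = [5, 4, 6, 5, 6]
upper_intermediate = [3, 5, 4, 6, 5, 4]
advanced           = [4, 3, 5, 4, 6, 5]

# kruskal() takes the k samples as separate arguments and returns
# the H statistic and its asymptotic p-value.
h_stat, p_value = kruskal(pre_intermediate, intermediate,
                          upper_intermediate, advanced)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
# A p-value above .05 would parallel the study's conclusion that
# repetitions do not distinguish proficiency levels.
```

The Kruskal-Wallis test is the non-parametric counterpart of a one-way ANOVA, a natural choice here given the small group sizes and count-type data.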
Masoomeh Estaji
Abstract
Classroom assessment, as a complementary part of the language learning process, is a powerful decision-making instrument. Nonetheless, more research is required on how teachers cope with its requirements and how these requirements affect their pedagogical practices. This study examines EFL teachers’ perceptions of assessment literacy and the criteria they use to assess their students. It also examines whether graduate and undergraduate teachers differ in their assessment literacy. To this end, using a survey and ex post facto research design and a two-part questionnaire on assessment literacy (adapted from Plake, 1993; Plake, Impara, & Fager, 1993), undergraduate (N=22) and graduate (N=10) teachers of English as a Foreign Language (EFL), English Literature, Translation Studies, and Linguistics, selected through purposive sampling, were compared on their perceptions of assessment literacy. The results revealed a statistically significant difference between undergraduate and graduate teachers’ perceptions of assessment literacy. Graduate teachers also reported higher perceptions of assessment literacy than their undergraduate counterparts, reflecting the effect of their level of education and educational background. The study thus highlights the importance of giving all prospective language teachers sufficient and proper training in language assessment, argues for the need for such training, suggests ways teachers can become more literate in language assessment, and presents ways teacher educators and language testing experts can assist in this process.
Masoomeh Estaji; Negar Babanezhad Kafshgar
Abstract
The current study aimed to identify items exhibiting Differential Item Functioning (DIF) in the Iranian TEFL MA Entrance Exam using two statistical methods: Logistic Regression (LR) and Mantel-Haenszel (MH). In addition, the flagged items were subjected to a content analysis to explore potential linguistic sources of such bias. To this end, the answer sheets of 2217 female and 735 male examinees who took the exam in 2015 were analyzed for items containing DIF. The LR technique identified eight DIF items; half of them favoured male examinees and the other half favoured female examinees. The MH procedure flagged eleven DIF items, of which six favoured male test takers and five favoured female test takers. The content analysis of the DIF items proposed no particular linguistic source for this deviant item behaviour.
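The Mantel-Haenszel statistic at the core of the MH procedure can be sketched in a few lines. Examinees are stratified by total score, each stratum is cross-tabulated as group (reference vs. focal) by item correctness, and a common odds ratio is pooled across strata. The counts below are hypothetical, not the exam's answer-sheet data, and the code is a minimal sketch of the standard MH formula rather than the study's implementation.

```python
# Minimal sketch of the Mantel-Haenszel common odds ratio used in DIF
# screening. The 2x2 counts per score stratum are invented for
# illustration; they are NOT the exam's data.
import math

def mh_odds_ratio(strata):
    """strata: list of (a, b, c, d) counts per score stratum, where
       a = reference-group correct, b = reference-group incorrect,
       c = focal-group correct,     d = focal-group incorrect."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# three hypothetical total-score strata for one item
strata = [(40, 10, 30, 20), (25, 25, 20, 30), (10, 40, 5, 45)]
alpha = mh_odds_ratio(strata)          # alpha > 1 favours the reference group
delta = -2.35 * math.log(alpha)        # ETS delta-scale transformation
print(f"alpha_MH = {alpha:.2f}")       # → alpha_MH = 2.00
print(f"delta_MH = {delta:.2f}")       # → delta_MH = -1.63
```

On the ETS delta scale, absolute values around 1.5 or more are conventionally treated as large DIF, which is how flagged items such as the eleven identified by the MH procedure would be classified before content analysis.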